    Vehicle detection in remote sensing images leveraging on simultaneous super-resolution

    Owing to the relatively small size of vehicles in remote sensing images, which lack sufficient appearance detail to distinguish vehicles from similar objects, detection performance remains far from satisfactory compared with results on everyday images. Inspired by the positive effect of the super-resolution convolutional neural network (SRCNN) on object detection and the success of deep CNN techniques, we apply a generative adversarial network framework to perform super-resolution and vehicle detection simultaneously in an end-to-end manner, backpropagating the detection loss into the SRCNN during training to facilitate detection. In particular, our method is unsupervised and bypasses the requirement for low-/high-resolution image pairs during training, which increases its generality and applicability. Extensive experiments on representative data sets demonstrate that our method outperforms state-of-the-art detectors. (The source code will be made available after the review process.)
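    To make the training scheme concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a GAN-style super-resolution generator updated with an adversarial loss plus a detection loss, so detection gradients flow back into the generator. All module names (SRGenerator, Discriminator, TinyDetector), layer sizes, and losses are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (an assumption, not the authors' released code): a GAN-style
# super-resolution generator trained jointly with a detector, so that the
# detection loss is backpropagated into the generator.
import torch
import torch.nn as nn

class SRGenerator(nn.Module):
    """SRCNN-style generator that upscales a low-resolution image by 2x."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3 * 4, 5, padding=2),
            nn.PixelShuffle(2),  # rearranges channels into 2x spatial upscaling
        )

    def forward(self, lr):
        return torch.sigmoid(self.net(lr))

class Discriminator(nn.Module):
    """Scores how much an image looks like a real high-resolution sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
        )

    def forward(self, img):
        return self.net(img)

class TinyDetector(nn.Module):
    """Toy detection head producing a per-pixel vehicle objectness map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, img):
        return self.net(img)

G, D, det = SRGenerator(), Discriminator(), TinyDetector()
opt_g = torch.optim.Adam(list(G.parameters()) + list(det.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

lr_img = torch.rand(2, 3, 64, 64)     # dummy low-resolution batch
obj_gt = torch.zeros(2, 1, 128, 128)  # dummy objectness ground truth
hr_like = torch.rand(2, 3, 128, 128)  # unpaired "HR-looking" samples

# Discriminator step: purely adversarial, so no aligned LR/HR pairs are
# required, matching the unpaired setting described in the abstract.
sr = G(lr_img)
loss_d = bce(D(hr_like), torch.ones(2, 1)) + bce(D(sr.detach()), torch.zeros(2, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: adversarial loss plus detection loss; the detection term
# sends gradients back through the super-resolved image into G.
loss_g = bce(D(sr), torch.ones(2, 1)) + bce(det(sr), obj_gt)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

    The key step is the last one: because det(sr) depends on G through the super-resolved image, the detection loss shapes which details the generator learns to recover.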

    EOVNet: earth-observation image-based vehicle detection network

    Vehicle detection from earth-observation (EO) images has been attracting considerable attention for its critical value in a variety of applications. Encouraged by the success of deep learning techniques based on convolutional neural networks (CNNs), which have revolutionized visual data processing and achieved state-of-the-art performance on a variety of classification and recognition benchmarks, we propose EOVNet (EO image-based vehicle detection network) to bridge the gap between advanced object detection research and the specific task of vehicle detection in EO images. Our network integrates several advanced techniques, including very deep residual networks for feature extraction, a feature pyramid to fuse multiscale features, a proposal-generation network with feature sharing, and hard example mining. Moreover, we investigate novel designs for probability-based localization and homography-based data augmentation, which further improve detection performance. For evaluation, we have collected nearly all the representative EO datasets associated with vehicle detection. Extensive experiments on these datasets demonstrate that our method outperforms the state-of-the-art object detection approach Faster R-CNN++ (based on the Faster R-CNN framework, but with significant improvements) by 5% in average precision. The source code will be made available after the review process.
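    Of the techniques listed, homography-based data augmentation is the most self-contained; the sketch below shows one plausible realization using OpenCV, where a random homography warps the image and the vehicle bounding boxes consistently. The function name and jitter parameters are assumptions, not taken from the paper.

```python
# Hedged sketch of homography-based augmentation: warp an image with a random
# homography and transform the vehicle box corners with the same matrix.
import cv2
import numpy as np

def random_homography_augment(image, boxes, max_shift=0.05, rng=None):
    """image: HxWx3 uint8 array; boxes: Nx4 array of [x1, y1, x2, y2]."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Jitter each image corner by up to max_shift of the image size.
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)).astype(np.float32)
    dst = src + jitter * np.float32([w, h])
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(image, H, (w, h))
    # Transform all four corners of each box, then take the bounding rectangle.
    corners = np.stack([boxes[:, [0, 1]], boxes[:, [2, 1]],
                        boxes[:, [2, 3]], boxes[:, [0, 3]]], axis=1)  # Nx4x2
    warped_corners = cv2.perspectiveTransform(
        corners.reshape(-1, 1, 2).astype(np.float32), H).reshape(-1, 4, 2)
    new_boxes = np.concatenate([warped_corners.min(axis=1),
                                warped_corners.max(axis=1)], axis=1)
    # Clip boxes to the image bounds.
    new_boxes[:, [0, 2]] = new_boxes[:, [0, 2]].clip(0, w)
    new_boxes[:, [1, 3]] = new_boxes[:, [1, 3]].clip(0, h)
    return warped, new_boxes
```

    Taking the axis-aligned bounding rectangle of the warped corners is a common approximation; for the small perspective jitter used in augmentation, the resulting box enlargement is negligible.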

    A two-stage density-aware single image deraining method

    Although advanced single-image deraining methods have been proposed, one main challenge remains: the available methods usually perform well on specific rain patterns but can hardly handle scenarios with dramatically different rain densities, especially when the effects of rain streaks and the veiling effect caused by rain accumulation are heavily coupled. To tackle this challenge, we propose a two-stage density-aware single-image deraining method with gated multi-scale feature fusion. In the first stage, a realistic physics model closer to real rain scenes is leveraged for initial deraining, and a network branch is also trained for rain density estimation to guide the subsequent refinement. The second stage of model-independent refinement is realized using a conditional generative adversarial network (cGAN), aiming to eliminate artifacts and improve restoration quality. In particular, dilated convolutions are applied to extract rain features at multiple scales, and gated feature fusion is exploited to better aggregate multi-level contextual information in both stages. Extensive experiments have been conducted on representative synthetic rain datasets and real rain scenes. Quantitative and qualitative results demonstrate the superiority of our method in terms of effectiveness and generalization ability, outperforming the state of the art.
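    The gated multi-scale fusion idea can be sketched as a small PyTorch module: parallel dilated convolutions capture rain features at several receptive fields, and a learned per-pixel gate weights each scale before fusion. Channel counts, dilation rates, and the residual connection below are illustrative assumptions rather than the paper's exact design.

```python
# Hedged sketch of gated multi-scale feature fusion with dilated convolutions.
import torch
import torch.nn as nn

class GatedMultiScaleFusion(nn.Module):
    def __init__(self, channels=32, dilations=(1, 2, 4)):
        super().__init__()
        # One dilated branch per scale; padding keeps the spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        # The gate predicts a per-pixel weight for every scale from the
        # concatenated multi-scale features.
        self.gate = nn.Conv2d(channels * len(dilations), len(dilations), 1)
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]                      # each B,C,H,W
        stacked = torch.stack(feats, dim=1)                        # B,S,C,H,W
        gates = torch.sigmoid(self.gate(torch.cat(feats, dim=1)))  # B,S,H,W
        fused = (stacked * gates.unsqueeze(2)).sum(dim=1)          # B,C,H,W
        return self.fuse(fused) + x                                # residual

# Quick shape check:
m = GatedMultiScaleFusion()
out = m(torch.rand(1, 32, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```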